
    Fisher Lecture: Dimension Reduction in Regression

    Beginning with a discussion of R. A. Fisher's early written remarks that relate to dimension reduction, this article revisits principal components as a reductive method in regression, develops several model-based extensions and ends with descriptions of general approaches to model-based and model-free dimension reduction in regression. It is argued that the role for principal components and related methodology may be broader than previously seen, and that the common practice of conditioning on observed values of the predictors may unnecessarily limit the choice of regression methodology.
    Comment: Discussed in [arXiv:0708.3776], [arXiv:0708.3777] and [arXiv:0708.3779]; rejoinder in [arXiv:0708.3781]. Published at http://dx.doi.org/10.1214/088342306000000682 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
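    The reductive use of principal components in regression can be illustrated with a minimal principal components regression sketch, assuming numpy; the function name and toy data are illustrative, not from the article:

```python
import numpy as np

def pcr(X, y, k=2):
    """Regress y on the top-k principal components of X, then map the
    coefficients back to the original predictor scale."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                      # component scores
    gamma = np.linalg.lstsq(Z, y - y.mean(), rcond=None)[0]
    return Vt[:k].T @ gamma                # coefficients for centered X

# Toy check: with k equal to the number of predictors, PCR reproduces OLS.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=200)
b_pcr = pcr(X, y, k=3)
b_ols = np.linalg.lstsq(X - X.mean(axis=0), y - y.mean(), rcond=None)[0]
```

    With k smaller than the number of predictors the fit is biased toward the leading principal directions, which is exactly the practice the article re-examines.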

    Testing predictor contributions in sufficient dimension reduction

    We develop tests of the hypothesis of no effect for selected predictors in regression, without assuming a model for the conditional distribution of the response given the predictors. Predictor effects need not be limited to the mean function, and smoothing is not required. The general approach is based on sufficient dimension reduction, the idea being to replace the predictor vector with a lower-dimensional version without loss of information on the regression. Methodology using sliced inverse regression is developed in detail.
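    The sliced inverse regression step underlying this approach can be sketched as follows; this is a minimal illustration assuming numpy, with slicing choices and names that are ours rather than the paper's:

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Estimate sufficient dimension-reduction directions via SIR."""
    n, p = X.shape
    # Standardize the predictors.
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T   # Sigma^{-1/2}
    Z = (X - mu) @ inv_sqrt
    # Slice the response and average Z within each slice.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale.
    _, v = np.linalg.eigh(M)
    beta = inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return beta / np.linalg.norm(beta, axis=0)

# Toy check: y depends on X only through the first coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] + 0.1 * rng.normal(size=500)
b = sir_directions(X, y)
```

    Tests of predictor contributions can then be framed in terms of whether the estimated directions place weight on the predictors in question.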

    Rejoinder: Fisher Lecture: Dimension Reduction in Regression

    Rejoinder to the Fisher Lecture: Dimension Reduction in Regression [arXiv:0708.3774].
    Comment: Published at http://dx.doi.org/10.1214/088342307000000078 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org/).

    Algorithms for envelope estimation

    Envelopes were recently proposed as methods for reducing estimative variation in multivariate linear regression. Estimation of an envelope usually involves optimization over Grassmann manifolds. We propose a fast and widely applicable one-dimensional (1D) algorithm for estimating an envelope in general. We reveal an important structural property of envelopes that facilitates our algorithm, and we prove both Fisher consistency and root-n consistency of the algorithm.
    Comment: 30 pages, 2 figures, 2 tables.
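    Schematically, a one-direction-at-a-time approach builds the envelope basis sequentially, each step optimizing over unit vectors in the orthogonal complement of the directions already found. The sketch below, assuming numpy/scipy, uses a log-determinant-type objective for a single direction; the function names, objective details and toy inputs are our illustration, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def envelope_1d(M, U, u, n_starts=20, seed=0):
    """Build a u-dimensional envelope basis for the pair (M, U),
    one direction at a time."""
    rng = np.random.default_rng(seed)
    p = M.shape[0]
    G = np.empty((p, 0))
    for _ in range(u):
        # Orthonormal basis of the orthogonal complement of span(G).
        Q, _, _ = np.linalg.svd(np.eye(p) - G @ G.T)
        B = Q[:, : p - G.shape[1]]
        Mk = B.T @ M @ B
        Nk = np.linalg.inv(B.T @ (M + U) @ B)

        def obj(v):
            w = v / np.linalg.norm(v)
            return np.log(w @ Mk @ w) + np.log(w @ Nk @ w)

        # Several random starts to guard against local optima.
        best = min((minimize(obj, rng.normal(size=B.shape[1]))
                    for _ in range(n_starts)), key=lambda r: r.fun)
        w = best.x / np.linalg.norm(best.x)
        G = np.column_stack([G, B @ w])
    return G

# Toy check: U lies in an eigenspace of M, so the 1-D envelope is e_1.
M = np.diag([1.0, 2.0, 3.0, 4.0])
U = np.outer([1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0])
G = envelope_1d(M, U, 1)
```

    Restricting each step to a single direction is what lets the method avoid a full Grassmann-manifold optimization over u-dimensional subspaces.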

    Determining the dimension of iterative Hessian transformation

    The central mean subspace (CMS) and iterative Hessian transformation (IHT) have been introduced recently for dimension reduction when the conditional mean is of interest. Suppose that X is a vector-valued predictor and Y is a scalar response. The basic problem is to find a lower-dimensional predictor η^T X such that E(Y|X) = E(Y|η^T X). The CMS defines the inferential object for this problem and IHT provides an estimating procedure. Compared with other methods, IHT requires fewer assumptions and has been shown to perform well when the additional assumptions required by those methods fail. In this paper we give an asymptotic analysis of IHT and provide stepwise asymptotic hypothesis tests to determine the dimension of the CMS, as estimated by IHT. Here, the original IHT method has been modified to be invariant under location and scale transformations. To provide empirical support for our asymptotic results, we present a series of simulation studies, which agree well with the theory. The method is applied to analyze an ozone data set.
    Comment: Published at http://dx.doi.org/10.1214/009053604000000661 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
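    The condition E(Y|X) = E(Y|η^T X) says the mean depends on X only through a few linear combinations. A small simulation, assuming numpy, where the CMS is one-dimensional: for elliptically contoured predictors the ordinary least squares slope is known to fall in the CMS, so even with a nonlinear mean it recovers η. This uses the OLS direction for illustration only, not the IHT estimator itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 5
eta = np.zeros(p)
eta[0] = 1.0                                    # true direction e_1
X = rng.normal(size=(n, p))                     # elliptically contoured predictors
y = np.exp(X @ eta) + 0.1 * rng.normal(size=n)  # mean depends on X only via eta^T X

# OLS slope = Sigma^{-1} Cov(X, y); under a linearity condition on X it
# falls in the central mean subspace even though the mean is nonlinear.
Xc = X - X.mean(axis=0)
b = np.linalg.lstsq(Xc, y - y.mean(), rcond=None)[0]
b_hat = b / np.linalg.norm(b)
```

    When the mean function is symmetric (for example quadratic in η^T X), the OLS covariance vanishes and first-moment methods fail; that is the situation Hessian-based procedures such as IHT are designed to handle.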

    Principal Fitted Components for Dimension Reduction in Regression

    We provide a remedy for two concerns that have dogged the use of principal components in regression: (i) principal components are computed from the predictors alone and do not make apparent use of the response, and (ii) principal components are not invariant or equivariant under full rank linear transformation of the predictors. The development begins with principal fitted components [Cook, R. D. (2007). Fisher lecture: Dimension reduction in regression (with discussion). Statist. Sci. 22 1--26] and uses normal models for the inverse regression of the predictors on the response to gain reductive information for the forward regression of interest. This approach includes methodology for testing hypotheses about the number of components and about conditional independencies among the predictors.
    Comment: Published at http://dx.doi.org/10.1214/08-STS275 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
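    The inverse-regression idea can be sketched minimally, assuming numpy: fit the inverse regression of X on a user-chosen basis f(y), then take leading eigenvectors of the covariance of the fitted values. The polynomial basis and implicit isotropic-error assumption here are simplifications for illustration, not the paper's full methodology:

```python
import numpy as np

def pfc(X, y, d=1, degree=3):
    """Principal fitted components with a polynomial basis in y."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    # Centered basis f(y) for the inverse regression X | y.
    F = np.column_stack([y ** k for k in range(1, degree + 1)])
    F = F - F.mean(axis=0)
    # Fitted values of the inverse regression.
    B = np.linalg.lstsq(F, Xc, rcond=None)[0]
    fitted = F @ B
    # Leading eigenvectors of the sample covariance of the fitted values.
    _, v = np.linalg.eigh(fitted.T @ fitted / n)
    return v[:, ::-1][:, :d]

# Toy check: only the first predictor is related to the response.
rng = np.random.default_rng(2)
latent = rng.normal(size=1000)
y = latent + 0.2 * rng.normal(size=1000)
X = np.column_stack([latent, rng.normal(size=(1000, 3))])
G = pfc(X, y)
```

    Unlike ordinary principal components, the reduction here is driven by the fitted values and so makes direct use of the response, which addresses concern (i).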

    Elevated soil lead: Statistical modeling and apportionment of contributions from lead-based paint and leaded gasoline

    While it is widely accepted that lead-based paint and leaded gasoline are primary sources of elevated concentrations of lead in residential soils, conclusions regarding their relative contributions are mixed and generally study specific. We develop a novel nonlinear regression for soil lead concentrations over time. It is argued that this methodology provides useful insights into the partitioning of the average soil lead concentration by source and time over large residential areas. The methodology is used to investigate soil lead concentrations from the 1987 Minnesota Lead Study and the 1990 National Lead Survey. Potential litigation issues are discussed briefly.
    Comment: Published at http://dx.doi.org/10.1214/07-AOAS112 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
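    A purely illustrative sketch of source apportionment via nonlinear regression, assuming scipy; the two-term decay model, parameter names and data below are ours and are not the paper's model:

```python
import numpy as np
from scipy.optimize import curve_fit

def soil_lead(t, a_paint, a_gas, r_paint, r_gas):
    # Average concentration as the sum of two decaying source terms.
    return a_paint * np.exp(-r_paint * t) + a_gas * np.exp(-r_gas * t)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 30.0, 60)                    # years since baseline
true = soil_lead(t, 400.0, 200.0, 0.02, 0.15)
obs = true + rng.normal(scale=5.0, size=t.size)   # noisy measurements

# Nonlinear least squares; starting values separate the two decay rates
# so the fit can distinguish the sources.
params, _ = curve_fit(soil_lead, t, obs, p0=[300.0, 300.0, 0.01, 0.1])
fitted = soil_lead(t, *params)
```

    Once fitted, each additive term gives that source's share of the average concentration at any time, which is the sense in which such a model apportions contributions.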